Intentional binding enhances hybrid BCI control
Mental imagery-based brain-computer interfaces (BCIs) allow users to interact
with the external environment by naturally bypassing the musculoskeletal system.
Making BCIs efficient and accurate is paramount to improving the reliability of
real-life and clinical applications, from open-loop device control to
closed-loop neurorehabilitation. By promoting a sense of agency and embodiment,
realistic setups that include multimodal communication channels, such as
eye gaze, and robotic prostheses aim to improve BCI performance. However, how
the mental imagery command should be integrated into these hybrid systems to
ensure the best interaction is still poorly understood. To address this
question, we performed a hybrid EEG-based BCI experiment involving healthy
volunteers enrolled in a reach-and-grasp action operated by a robotic arm. The
main results showed that the timing of the hand-grasping motor imagery
significantly affects BCI accuracy as well as the spatiotemporal brain
dynamics. Higher control accuracy was obtained when motor imagery was performed
just after the robot's reaching movement, as compared to before or during it.
The proximity to the subsequent robot grasping favored intentional binding, led to stronger
motor-related brain activity, and primed the ability of sensorimotor areas to
integrate information from regions implicated in higher-order cognitive
functions. Taken together, these findings provide fresh evidence about the
effects of intentional binding on human behavior and cortical network dynamics
that can be exploited to design a new generation of efficient brain-machine
interfaces.
Comment: 18 pages, 5 figures, 7 supplementary materials
Functional Connectivity for BCI: OpenViBE implementation
Présentations - Session 2. International audience
An OpenViBE Python-based framework for the efficient handling of MI BCI protocols
International audience. A typical Motor Imagery (MI) experimental pipeline is composed of an EEG data acquisition phase, followed by an analysis phase to train a classification algorithm (such as LDA). The training uses features extracted from the acquired data, such as spectral power for a subset of sensors and frequencies in which desynchronization can be observed between the MI tasks. In the final phase of the experiment, the trained classifier is used in an online setup.
The feature selection phase is crucial for the optimal functioning of a BCI system, but the large number of parameters can make it a long step of trial and error, which is not acceptable in clinical settings. Training a classification algorithm can also be challenging and time-consuming, as it may involve multiple manipulations and datatype conversions with external software.
Here, we propose a new Python-based framework to manage the whole experimental pipeline smoothly, integrating seamlessly with OpenViBE. We focused our work on feature analysis, selection, and classifier training. An easy-to-use GUI allows the experimenter to keep track of the multiple acquired EEG signal files and to process them for analysis and training. Convenient tools compute spectral features and visualize them as statistical R² maps, PSD plots, scalp topographies, and ERD/ERS time-frequency maps for selected sensors or frequency bands, combining trials across multiple runs. Finally, a set of runs can be chosen for training the classification algorithm with only a few clicks and seconds of processing. All signal processing operations use OpenViBE in the background, transparently to the user.
This framework has been successfully validated on real EEG data obtained with a Graz MI protocol. It allows the experimenter to identify the underlying brain processes during MI and to choose the best combination of features for the subsequent classification. Work is ongoing to add further functionalities, notably functional connectivity.
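As a rough illustration of the pipeline this abstract describes (band-power features for selected sensors and bands, then LDA training), here is a minimal sketch in Python using NumPy, SciPy, and scikit-learn. The sampling rate, band edges, channel count, and synthetic two-class data are illustrative assumptions, not part of the framework's actual API.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling rate in Hz (assumed)

def band_power_features(trials, band=(8.0, 30.0)):
    """Log band power per channel: (n_trials, n_channels, n_samples) -> (n_trials, n_channels).
    The kind of per-sensor spectral-power feature the pipeline extracts."""
    freqs, psd = welch(trials, fs=FS, nperseg=FS, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(psd[..., mask].sum(axis=-1))

# Synthetic stand-in for rest vs. MI epochs (illustrative data only).
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 40, 8, 2 * FS
rest = rng.normal(size=(n_trials, n_channels, n_samples))
mi = rng.normal(size=(n_trials, n_channels, n_samples))
mi[:, :2] *= 0.5  # fake desynchronization: reduced power on two "motor" channels

X = band_power_features(np.concatenate([rest, mi]))
y = np.array([0] * n_trials + [1] * n_trials)
clf = LinearDiscriminantAnalysis().fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

In the framework itself, features like these would first be inspected through the R² maps, PSD plots, and topographies before committing to a sensor/band combination for the online classifier.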
HappyFeat, an interactive and efficient BCI Framework for clinical applications
International audience
Comparison of strategies to elicit motor imagery-related brain patterns in multimodal BCI settings
International audience. Cognitive tasks such as motor imagery (MI), as used in Brain Machine Interfaces (BMI), present many issues: they are demanding, often counter-intuitive, and complex to describe to the subject during instruction. Engaging feedback related to brain activity is key to keeping the subject involved in the task. We built a framework in which the subject controls a robotic arm both by gaze and by brain activity in an enriched environment, using eye-tracking glasses and electroencephalography (EEG). In our study, we tackle the important question of the preferable moment to perform the MI task in the context of robotic arm control. To answer this question, we designed a protocol where subjects are placed in front of the robotic arm and choose with their gaze which object to seize. Then, based on stimuli blended into an augmented table, the subjects perform MI or resting-state tasks. The stimuli consist of a red (MI) or blue (resting) dot circling the object to seize. At the end of an MI task, the hand should close. There are three strategies, corresponding to three different moments at which to perform the mental task: 1) after the robot's movement towards the object, 2) before the robot's movement, 3) during the robot's movement. The experiment is split into a calibration phase and two control phases: in the calibration phase, the hand always closes during the MI task, whereas in the control phases, closing relies on the subject's brain activity. We rely on power spectral density estimates obtained with the Burg autoregressive method to differentiate between MI and resting state in the alpha and beta bands (8-35 Hz). Our method for comparing the strategies relies on classification performance (two-class LDA) using sensitivity, and on statistical differences between conditions (R-squared maps). Early results on the first 10 subjects show significant differences between strategies 1 and 2 in the offline classification analysis, and a trend in the real performance scores in favor of strategy 1.
In all subjects, we observed brain activity localized in the motor cortex at a significant level with respect to the resting state. This indicates that the framework, which places the subject at the center with a high sense of agency reinforced by gaze control, gives good results and makes it more certain that the subject is performing the right task. Taken together, our results indicate that the moment at which to perform MI is a relevant parameter of the framework. The strategy in which the robot is at the object level when MI is performed seems so far to be the best.
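The discrimination step this abstract names (Burg autoregressive PSD estimation over the 8-35 Hz alpha/beta range) can be sketched as follows. This is an illustrative reimplementation of the standard Burg recursion, not the study's actual code; the sampling rate, model order, and synthetic signal are assumptions.

```python
import numpy as np

def burg_ar(x, order):
    """Estimate AR coefficients of x with Burg's method.
    Returns (a, noise_var) where a = [1, a1, ..., a_order]."""
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])
    f, b = x[1:].copy(), x[:-1].copy()  # forward / backward prediction errors
    e = np.dot(x, x) / len(x)           # driving-noise variance estimate
    for _ in range(order):
        # Reflection coefficient minimizing the combined forward/backward error.
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        e *= 1.0 - k * k
        f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
    return a, e

def ar_psd(a, noise_var, freqs, fs):
    """Power spectral density of the fitted AR model at the given frequencies."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))) / fs)
    return noise_var / (fs * np.abs(z @ a) ** 2)

# Synthetic 4 s epoch: a 10 Hz (alpha-band) rhythm buried in noise (illustrative).
fs = 250
rng = np.random.default_rng(1)
t = np.arange(4 * fs) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

a, ev = burg_ar(x, order=16)
freqs = np.linspace(1.0, 40.0, 391)   # covers the 8-35 Hz band of interest
psd = ar_psd(a, ev, freqs, fs)
peak = freqs[np.argmax(psd)]
print(f"spectral peak at {peak:.1f} Hz")
```

In the study's setting, band power integrated from such a PSD over the alpha and beta ranges would feed the two-class LDA used to separate MI from rest.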
Exploring strategies for multimodal BCIs in an enriched environment
International audience. Brain-computer interfaces rely on cognitive tasks that seem easy at first sight but turn out to be complex to perform. In this context, providing engaging feedback and fostering the subject's embodiment is one of the keys to overall system performance. However, noninvasive brain activity alone has often been shown to be insufficient to precisely control all the degrees of freedom of complex external devices such as a robotic arm. Here, we developed a hybrid BCI that also integrates eye-tracking technology to improve the subject's overall sense of agency. While this solution has been explored before, the best strategy for combining gaze and brain activity to obtain effective results has been poorly studied. To address this gap, we explore two different strategies in which the timing of motor imagery changes; one strategy may be less intuitive than the other, which would result in performance differences.
Brain-Computer Interface using OpenViBE, an open-source software platform for Brain-Computer Interfaces [hands-on tutorial]
International audience. OpenViBE is a software platform dedicated to designing, testing and using brain-computer interfaces (BCIs). It can be used to acquire, filter, process, classify and visualize brain signals in real time. During the first part of the session, a general overview of the BCI context and of the software will be given. We will also present several examples of use of OpenViBE in the BCI domain. During the second part, we will propose a step-by-step tutorial in which participants will design a simple motor imagery-based BCI scenario from pre-recorded EEG signals.